8 research outputs found

    Anchor-free SAR Ship Instance Segmentation with Centroid-distance Based Loss

    Instance segmentation methods for synthetic aperture radar (SAR) ship imaging still face several unsolved problems. 1) Most anchor-based detection algorithms suffer from difficult tuning of anchor-related parameters and high computational costs. 2) Different tasks share the same features without accounting for the differences between tasks, leading to mismatched shared features and inconsistent training targets. 3) Common instance segmentation loss functions cannot effectively distinguish the positional relationships between ships with the same degree of overlap. To alleviate these problems, we first adopt a lightweight feature extractor and an anchor-free convolutional network, which effectively reduce computational consumption and model complexity. Second, to fully disseminate feature information, a dynamic encoder–decoder is proposed to dynamically transform the shared features into task-specific features in the channel and spatial dimensions. Third, a novel loss function based on centroid distance is designed to make full use of the geometric shapes of, and positional relationships between, SAR ship targets. To better extract features from SAR images in complex scenes, we further propose a dilated convolution enhancement module, which uses multiple receptive fields to take full advantage of shallow feature information. Experiments conducted on the SAR ship detection dataset show that the proposed method is superior to other state-of-the-art algorithms in terms of instance segmentation accuracy and model complexity.
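The centroid-distance idea in this abstract can be illustrated with a toy sketch. The paper's exact loss is not reproduced here, so the normalization by the image diagonal and the name `centroid_distance_penalty` are illustrative assumptions, not the authors' formulation; the sketch only shows how a positional term can separate predictions that have equal overlap with the ground truth.

```python
import math

def centroid(mask):
    """Centroid (row, col) of a binary mask given as a list of rows."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def centroid_distance_penalty(pred_mask, gt_mask):
    """Hypothetical stand-in for a centroid-distance loss term:
    centroid offset normalized by the image diagonal."""
    (pr, pc), (gr, gc) = centroid(pred_mask), centroid(gt_mask)
    h, w = len(gt_mask), len(gt_mask[0])
    return math.hypot(pr - gr, pc - gc) / math.hypot(h, w)
```

Two predictions with identical IoU against the ground truth but different placement then receive different penalties, which is the distinguishing property the abstract motivates.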

    Learning the Precise Feature for Cluster Assignment

    Clustering is one of the fundamental tasks in computer vision and pattern recognition. Recently, deep clustering methods (algorithms based on deep learning) have attracted wide attention with their impressive performance. Most of these algorithms combine deep unsupervised feature learning and standard clustering together. However, the separation of feature extraction and clustering leads to suboptimal solutions, because the two-stage strategy prevents representation learning from adapting to subsequent tasks (e.g., clustering according to specific cues). To overcome this issue, efforts have been made toward the dynamic adaptation of representation and cluster assignment, whereas current state-of-the-art methods suffer from heuristically constructed objectives, with representation and cluster assignment alternately optimized. To further standardize the clustering problem, we formulate the objective of clustering as finding a precise feature as the cue for cluster assignment. Based on this, we propose a general-purpose deep clustering framework that, for the first time, radically integrates representation learning and clustering into a single pipeline. The proposed framework exploits the powerful ability of recently developed generative models to learn intrinsic features, and imposes entropy minimization on the distribution of cluster assignments via a variational algorithm. Experimental results show that the performance of our method is superior, or at least comparable, to the state-of-the-art methods on handwritten digit recognition, face recognition, and object recognition benchmark datasets.
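The entropy-minimization term mentioned in the abstract can be sketched independently of the full variational framework. The function below is a generic illustration (its name and form are assumptions, not the paper's code): it computes the mean Shannon entropy of soft cluster-assignment distributions, which a trainer would minimize to push each assignment toward a confident one-hot vector.

```python
import math

def mean_assignment_entropy(assignments):
    """Average Shannon entropy (nats) of per-sample soft cluster assignments.

    `assignments` is a list of probability vectors, one per sample.
    Minimizing this quantity drives each row toward a one-hot assignment.
    """
    total = 0.0
    for p in assignments:
        total += -sum(q * math.log(q) for q in p if q > 0.0)
    return total / len(assignments)
```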

    A Novel Semi-Supervised Learning Method Based on Fast Search and Density Peaks

    Radar image recognition is a hotspot in the field of remote sensing. Given sufficient labeled samples, recognition algorithms can achieve good classification results. However, labeled samples are scarce and costly to obtain. Our main interest in this paper is how to use unlabeled samples to improve the performance of a recognition algorithm when labeled samples are limited. This is a semi-supervised learning problem. Unlike existing semi-supervised learning methods, however, we do not use unlabeled samples directly; instead, we look for safe and reliable unlabeled samples before using them. In this paper, two new semi-supervised learning methods are proposed: a semi-supervised learning method based on fast search and density peaks (S2DP) and an iterative S2DP method (IS2DP). When the labeled samples satisfy a certain requirement, S2DP uses fast search and a density-peak clustering method to detect reliable unlabeled samples based on weighted kernel Fisher discriminant analysis (WKFDA). Then, a labeling method based on clustering information (LCI) is designed to label the unlabeled samples. When the labeled samples are insufficient, IS2DP is used to iteratively search for reliable unlabeled samples for semi-supervision. These samples are then added to the labeled samples to improve the recognition performance of S2DP. In the experiments, real radar images are used to verify the performance of the proposed algorithm in dealing with the scarcity of labeled samples. In addition, our algorithm is compared against several semi-supervised deep learning methods with similar structures. Experimental results demonstrate that the proposed algorithm has better stability than these methods.
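The abstract builds on the clustering-by-fast-search-and-find-of-density-peaks idea, in which each sample receives a local density rho and a distance delta to the nearest higher-density sample; points with both values large are natural cluster centers, and points far from any dense region look unreliable. The sketch below shows only that generic rho/delta computation, not the paper's S2DP selection rule or the WKFDA step.

```python
import math

def density_peaks(points, cutoff):
    """Compute per-point (rho, delta) as in density-peak clustering:
    rho = number of neighbours within `cutoff`,
    delta = distance to the nearest point of strictly higher density
    (for the densest points, the farthest distance is used instead)."""
    n = len(points)
    dist = [[math.dist(a, b) for b in points] for a in points]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < cutoff)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

An isolated sample gets rho = 0 and a large delta, so a selection rule can flag it as an unsafe candidate before pseudo-labeling.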

    Target recognition from multi-domain Radar Range Profile using Multi-input Bidirectional LSTM with HMM

    Radars, as active detection sensors, play an important role in various intelligent devices. Target recognition based on high-resolution range profiles (HRRP) is an important approach for radars to monitor targets of interest. Traditional recognition algorithms usually rely on a single feature, which makes it difficult to maintain recognition performance. In this paper, 2-D sequence features are extracted from HRRP in several data domains: the time-frequency domain, the time domain, and the frequency domain. A novel target identification method is then proposed that combines bidirectional Long Short-Term Memory (BLSTM) and a Hidden Markov Model (HMM) to learn these multi-domain sequence features. Specifically, we first extract multi-domain HRRP sequences. Next, a new multi-input BLSTM is proposed to learn these multi-domain HRRP sequences, which are then fed to a standard HMM classifier to learn multi-aspect features. Finally, the trained HMM is used to perform the recognition task. Extensive experiments are carried out on the publicly accessible benchmark MSTAR database. Our proposed algorithm achieves an identification accuracy of over 91% with a lower false alarm rate and higher identification confidence than several state-of-the-art techniques.
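The multi-domain idea, representing one range profile in the time, frequency, and time-frequency domains, can be sketched generically. The helper below is an illustrative assumption rather than the paper's preprocessing: it returns the raw profile, its DFT magnitude spectrum, and a crude sliding-window spectrogram (a naive O(n^2) DFT is used for clarity).

```python
import cmath

def dft_magnitude(x):
    """Naive DFT magnitude spectrum of a real sequence (O(n^2), for clarity)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def multi_domain_features(profile, win=4):
    """Illustrative multi-domain representation of one range profile:
    time domain (raw samples), frequency domain (DFT magnitude), and a
    crude time-frequency domain (DFT magnitude of sliding windows)."""
    time_dom = list(profile)
    freq_dom = dft_magnitude(profile)
    tf_dom = [dft_magnitude(profile[i:i + win])
              for i in range(0, len(profile) - win + 1, win)]
    return time_dom, freq_dom, tf_dom
```

Each of the three views would then feed one input branch of a multi-input sequence model.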

    SAR Target Incremental Recognition based on Features with Strong Separability

    With the rapid development of deep learning technology, many SAR target recognition algorithms based on convolutional neural networks have achieved exceptional performance on various datasets. However, conventional neural networks are iterated repeatedly on a fixed dataset until convergence; once they learn new tasks, a large amount of previously learned knowledge is forgotten, leading to a significant decline in performance on old tasks. This paper presents an incremental learning method based on strong separability features (SSF-IL) to address the model's forgetting of previously learned knowledge. SSF-IL employs both intra-class and inter-class scatter to compute a feature separability loss that enhances the linear separability of features during incremental learning. When learning new classes, an intra-class clustering loss is proposed to replace conventional knowledge distillation; this loss constrains the old-class features to cluster around the saved class centers, maintaining the separability among old-class features. Finally, a classifier bias-correction method based on boundary features is designed to reinforce the classifier's decision boundary and reduce classification errors. SAR target incremental recognition experiments are conducted on the MSTAR dataset, and the results are compared with several existing incremental learning algorithms to demonstrate the effectiveness of the proposed algorithm.
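The separability loss described above rests on the classic intra-class and inter-class scatter statistics. The trace-ratio form and the name `separability_loss` below are generic assumptions, not the SSF-IL definition: minimizing the ratio pulls features toward their class centers while pushing class centers apart.

```python
def separability_loss(features, labels):
    """Trace of within-class scatter over trace of between-class scatter.

    `features` is a list of equal-length vectors, `labels` the class ids.
    Smaller values mean tighter classes that sit farther apart.
    """
    classes = sorted(set(labels))
    dim = len(features[0])
    global_mean = [sum(f[d] for f in features) / len(features) for d in range(dim)]
    s_w = s_b = 0.0
    for c in classes:
        members = [f for f, y in zip(features, labels) if y == c]
        mean_c = [sum(f[d] for f in members) / len(members) for d in range(dim)]
        s_w += sum((f[d] - mean_c[d]) ** 2 for f in members for d in range(dim))
        s_b += len(members) * sum((mean_c[d] - global_mean[d]) ** 2
                                  for d in range(dim))
    return s_w / s_b
```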

    A Novel Semi-Supervised Convolutional Neural Network Method for Synthetic Aperture Radar Image Recognition

    Background / introduction: SAR image automatic target recognition (SAR-ATR) technology is one of the research hotspots in the field of image cognitive learning. Inspired by the human cognitive process, researchers have designed convolutional neural network (CNN) based methods and successfully applied them to SAR-ATR. However, the performance of CNNs deteriorates significantly when labelled samples are insufficient. Methods: To effectively utilize unlabelled samples, a semi-supervised CNN method is proposed in this paper. First, a CNN is used to extract the features of the samples, and the class probabilities of the unlabelled samples are then computed using the softmax function. To improve the effectiveness of the unlabelled samples, we remove possible noise by thresholding the class probabilities. Afterwards, based on the remaining class probabilities, the information contained in the unlabelled samples is integrated with the scatter matrices of the standard linear discriminant analysis (LDA) method. The loss function of the CNN consists of a supervised component and an unsupervised component, where the supervised component is built from the cross-entropy function and the unsupervised component from the scatter matrices. The class probabilities control the impact of the unlabelled samples in the training process, further improving the reliability of the unlabelled samples. Results: We choose ten types of targets from the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. The experimental results show that the recognition accuracy of our method is significantly higher than that of the supervised CNN method. Conclusions: This proves that our method can effectively improve SAR-ATR accuracy despite a deficiency of labelled samples.
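The noise-removal step in the Methods paragraph, keeping an unlabelled sample only when its top softmax probability clears a threshold, is straightforward to sketch. The function below is a generic illustration of that filtering, not the paper's implementation, and the 0.9 threshold is an assumed value.

```python
def select_reliable(probs, threshold=0.9):
    """Keep (index, pseudo-label) pairs whose top softmax probability
    reaches `threshold`; low-confidence samples are treated as noise."""
    selected = []
    for i, p in enumerate(probs):
        best = max(range(len(p)), key=p.__getitem__)
        if p[best] >= threshold:
            selected.append((i, best))
    return selected
```

The surviving samples would then contribute to the unsupervised scatter-matrix term, weighted by their class probabilities.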

    Cross-modality features fusion for synthetic aperture radar image segmentation

    Synthetic Aperture Radar (SAR) image segmentation stands as a formidable research frontier within the domain of SAR image interpretation. Fully convolutional network (FCN) methods have recently brought remarkable improvements in SAR image segmentation. Nevertheless, these methods do not exploit the peculiarities of SAR images, leading to suboptimal segmentation accuracy. To address this issue, we rethink SAR image segmentation in terms of the sequential information of transformers and cross-modal features. We first discuss the peculiarities of SAR images and extract mean and texture features that serve as auxiliary features. The extraction of auxiliary features helps unearth the distinctive information in SAR images. Afterward, we propose a feature-enhanced FCN with a transformer encoder structure, termed FE-FCN, which extracts both context-level and pixel-level features. In FE-FCN, the features of each single-modality encoder are aligned and inserted into the model to explore the potential correspondence between modalities. We also employ long skip connections to share each modality's distinguishing and particular features. Finally, we present the connection-enhanced conditional random field (CE-CRF) to capture the connection information of the image pixels. Since the CE-CRF utilizes the auxiliary features to enhance the reliability of the connection information, the segmentation results of FE-FCN are further optimized. Comparative experiments were conducted on the Fangchenggang (FCG), Pucheng (PC), and Gaofen (GF) SAR datasets. Our method demonstrates superior segmentation accuracy compared to other conventional image segmentation methods, as confirmed by the experimental results.
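The mean and texture auxiliary features mentioned in this abstract can be illustrated with simple local statistics. Real SAR pipelines often use more elaborate texture descriptors, so the choice of local variance as the "texture" channel here is an assumption made only for illustration.

```python
def auxiliary_features(img, win=3):
    """Per-pixel local mean and local variance over a win x win window
    (clipped at image borders), as simple 'mean' and 'texture' channels."""
    h, w = len(img), len(img[0])
    r = win // 2
    mean_map = [[0.0] * w for _ in range(h)]
    var_map = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - r), min(h, i + r + 1))
                    for b in range(max(0, j - r), min(w, j + r + 1))]
            m = sum(vals) / len(vals)
            mean_map[i][j] = m
            var_map[i][j] = sum((v - m) ** 2 for v in vals) / len(vals)
    return mean_map, var_map
```

Such channels would be stacked alongside the raw image as an auxiliary modality for the segmentation network.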

    Comparison of gene expression and genome-wide DNA methylation profiling between phenotypically normal cloned pigs and conventionally bred controls

    Animal breeding via somatic cell nuclear transfer (SCNT) has enormous potential in agriculture and biomedicine. However, concerns have been raised about whether SCNT animals are as healthy or epigenetically normal as conventionally bred ones, since the efficiency of cloning by SCNT is much lower than that of natural breeding or in-vitro fertilization (IVF). We therefore conducted genome-wide gene expression and DNA methylation profiling of phenotypically normal cloned pigs and control pigs in two tissues (muscle and liver), using the Affymetrix Porcine expression array as well as modified methylation-specific digital karyotyping (MMSDK) with Solexa sequencing technology. Typical tissue-specific differences in both gene expression and DNA methylation were observed in muscle and liver from cloned as well as control pigs. Gene expression profiles were highly similar between cloned pigs and controls, although a small set of genes showed altered expression. Cloned pigs presented a more divergent pattern of DNA methylation in unique sequences in both tissues; in particular, a small set of genomic sites differed in DNA methylation status, with a trend toward slightly increased methylation levels in cloned pigs. Molecular network analysis of the genes containing such differentially methylated loci revealed a significant network related to tissue development. In conclusion, our study showed that phenotypically normal cloned pigs were highly similar to conventionally bred pigs in gene expression, but moderate alterations in DNA methylation still exist, especially in certain unique genomic regions.